
    Frustrated Heisenberg antiferromagnets: fluctuation induced first order vs deconfined quantum criticality

    Recently it was argued that quantum phase transitions can be radically different from classical phase transitions, the highlight being 'deconfined critical points' that exhibit fractionalization of quantum numbers due to Berry phase effects. Such transitions are supposed to occur in frustrated ($J_1$-$J_2$) quantum magnets. We have developed a novel renormalization approach for such systems that fully respects the underlying lattice structure. According to our findings, another profound phenomenon is around the corner: a fluctuation-induced (order-out-of-disorder) first-order transition. This has to occur for large spin, and we conjecture that it is responsible for the weakly first-order behavior recently observed in numerical simulations of frustrated $S=1/2$ systems.
    Comment: 7 pages, 3 figures, submitted to EP
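    For reference, the standard $J_1$-$J_2$ Heisenberg Hamiltonian on the square lattice (the model class referred to above; the paper's own conventions may differ) reads

    H = J_1 \sum_{\langle i,j\rangle} \mathbf{S}_i \cdot \mathbf{S}_j + J_2 \sum_{\langle\langle i,j\rangle\rangle} \mathbf{S}_i \cdot \mathbf{S}_j,

    where the first sum runs over nearest-neighbor bonds, the second over next-nearest-neighbor (diagonal) bonds, and frustration arises for antiferromagnetic $J_2 > 0$.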

    Nonlinearities and cyclical behavior: the role of chartists and fundamentalists

    We develop a behavioral exchange rate model with chartists and fundamentalists to study cyclical behavior in foreign exchange markets. Within our model, the market impact of fundamentalists depends on the strength of their belief in fundamental analysis. Estimation of a STAR GARCH model shows that the more the exchange rate deviates from its fundamental value, the more fundamentalists leave the market. In contrast to previous findings, our paper indicates that, due to the nonlinear presence of fundamentalists, market stability decreases with increasing misalignments. A stabilization policy, such as central bank interventions, may help to deflate bubbles.
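    As a minimal sketch (not the authors' estimated STAR-GARCH specification), the qualitative mechanism described above can be written as a toy simulation in which chartists extrapolate recent changes, fundamentalists expect mean reversion, and the fundamentalist market weight shrinks as the misalignment grows; the weighting rule and all parameter values below are illustrative assumptions.

import numpy as np

def simulate(T=500, f=1.0, chi=0.9, phi=0.2, alpha=5.0, sigma=0.01, seed=0):
    """Toy chartist-fundamentalist exchange rate dynamics (illustrative only).

    Chartists extrapolate the last change; fundamentalists expect reversion
    toward the fundamental value f. The fundamentalist weight w_fund shrinks
    with the squared misalignment (alpha controls how fast), mimicking the
    finding that fundamentalists leave the market as misalignments grow.
    """
    rng = np.random.default_rng(seed)
    s = np.full(T, f)                                   # log exchange rate path
    for t in range(2, T):
        misalignment = s[t - 1] - f
        w_fund = np.exp(-alpha * misalignment**2)       # assumed weighting rule
        chartist = chi * (s[t - 1] - s[t - 2])          # trend extrapolation
        fundamentalist = -phi * misalignment            # expected mean reversion
        s[t] = (s[t - 1] + w_fund * fundamentalist + (1 - w_fund) * chartist
                + sigma * rng.standard_normal())
    return s

if __name__ == "__main__":
    print(simulate()[:5])

    Under these assumptions, large deviations weaken the stabilizing fundamentalist force, so misalignments can build up until noise or an intervention that pushes the rate back toward f deflates them.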

    Learning semantic sentence representations from visually grounded language without lexical knowledge

    Current approaches to learning semantic representations of sentences often use prior word-level knowledge. The current study aims to leverage visual information in order to capture sentence-level semantics without the need for word embeddings. We use a multimodal sentence encoder trained on a corpus of images with matching text captions to produce visually grounded sentence embeddings. Deep neural networks are trained to map the two modalities to a common embedding space such that for an image the corresponding caption can be retrieved and vice versa. We show that our model achieves results comparable to the current state-of-the-art on two popular image-caption retrieval benchmark data sets: MSCOCO and Flickr8k. We evaluate the semantic content of the resulting sentence embeddings using data from the Semantic Textual Similarity benchmark task and show that the multimodal embeddings correlate well with human semantic similarity judgements. The system achieves state-of-the-art results on several of these benchmarks, which shows that a system trained solely on multimodal data, without assuming any word representations, is able to capture sentence-level semantics. Importantly, this result shows that we do not need prior knowledge of lexical-level semantics in order to model sentence-level semantics. These findings demonstrate the importance of visual information in semantics.
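    A minimal sketch of the dual-encoder idea described above: images and captions are mapped into a shared embedding space and trained with an in-batch contrastive (hinge/triplet) retrieval loss. The encoder architectures, dimensions, margin, and loss form are illustrative assumptions rather than the paper's exact setup; the only property carried over from the abstract is that the caption encoder learns its token embeddings from scratch instead of using pretrained word vectors.

import torch
import torch.nn as nn
import torch.nn.functional as F

class CaptionEncoder(nn.Module):
    """Encodes token ids with embeddings learned from scratch (no pretrained
    word vectors) followed by a GRU; the architecture is an assumption."""
    def __init__(self, vocab_size, embed_dim=300, hidden_dim=1024, out_dim=1024):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.rnn = nn.GRU(embed_dim, hidden_dim, batch_first=True)
        self.proj = nn.Linear(hidden_dim, out_dim)

    def forward(self, tokens):                 # tokens: (batch, seq_len) int64
        _, h = self.rnn(self.embed(tokens))
        return F.normalize(self.proj(h[-1]), dim=-1)

class ImageEncoder(nn.Module):
    """Projects precomputed image features (e.g. CNN activations) into the
    shared space; the feature dimension is an assumption."""
    def __init__(self, feat_dim=2048, out_dim=1024):
        super().__init__()
        self.proj = nn.Linear(feat_dim, out_dim)

    def forward(self, feats):                  # feats: (batch, feat_dim)
        return F.normalize(self.proj(feats), dim=-1)

def retrieval_loss(img, cap, margin=0.2):
    """Hinge-based triplet loss over in-batch negatives, a common choice for
    image-caption retrieval; the paper's exact objective may differ."""
    scores = img @ cap.t()                               # cosine similarities
    pos = scores.diag().unsqueeze(1)
    cost_cap = (margin + scores - pos).clamp(min=0)      # wrong captions for an image
    cost_img = (margin + scores - pos.t()).clamp(min=0)  # wrong images for a caption
    mask = torch.eye(scores.size(0), dtype=torch.bool)
    return cost_cap.masked_fill(mask, 0).mean() + cost_img.masked_fill(mask, 0).mean()

# Toy usage with random data: a batch of 8 image-caption pairs.
imgs = ImageEncoder()(torch.randn(8, 2048))
caps = CaptionEncoder(vocab_size=5000)(torch.randint(0, 5000, (8, 12)))
loss = retrieval_loss(imgs, caps)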